
The risks and opportunities of AI on humanitarian action

Wednesday 15 – Friday 17 May 2024 | WP3368


In partnership with the Foreign, Commonwealth and Development Office

AI will have profound implications for humanitarians.  If its potential is realised, it could help address chronic issues that have hindered effective and accountable humanitarian action.  AI could transform how humanitarian actors coordinate, how decision-makers access critical information, or how accountability is provided to communities affected by humanitarian disasters.

Collaboration amongst humanitarian actors and with industry experts is key to realising a positive vision for AI.  The inevitable use of AI will increasingly shape all parts of the humanitarian system as organisations rely on AI-powered systems to drive efficiency gains.  Many challenges will therefore be shared and tackling them jointly will deliver better outcomes.

The recent UK AI Safety Summit provided a forum for industry and governments to come together to build consensus on how AI can safely be used for good. 

While the application of AI to humanitarian action is still in a formative period, it is important to shape the use and development of AI to be consistent with humanitarian ethics, principles, and standards, and to determine pathways for further collaboration.

Executive summary

This Wilton Park meeting in May 2024 brought together participants from local NGOs, INGOs, industry, academic institutions, the private sector, and governments to discuss the impact of AI in humanitarian contexts, how it can be harnessed, and how potential harms to vulnerable populations could be addressed. Throughout the discussions, several recurrent themes emerged which should help frame forthcoming conversations on AI and humanitarian action.

“AI will fundamentally alter every aspect of our lives. Indeed, it is already having an impact.”

AI potential

AI carries huge potential for the effective delivery of humanitarian aid to greater numbers of people at a time when humanitarian needs are growing and resources are unable to meet current demand.

However, the risks and potential harms that AI can bring are cause for alarm, including the exacerbation of conflict and inequalities; the erosion of trust in information, governance processes, and the humanitarian system itself; and the undermining of social cohesion.

Humanitarians will need to proactively chart the right approach if they are to take advantage of the benefits, while comprehensively addressing the risk of harm.

The importance of collaboration

Collaboration was a key theme throughout the meeting. Sharing knowledge and learning about AI applications, and about ways of working with tech companies, governments, and communities, was seen as increasingly essential. Participants were keen to consolidate and use case studies and to share experiences, including failures, working as partners with a common goal: to classify lessons learned, avoid duplication, and foster collaboration. As well as harnessing lessons learned, collaboration would enable a more strategic approach to overcoming challenges, which could shape a broader sectoral approach. Socialising definitions, building upon existing resources, and creating shared platforms for collective knowledge are relatively easy actions to implement when organisations choose to work together.

“We are the ones figuring out what we’re going to do with these incredibly powerful and transformative tools.”

Greater transparency and collaboration can also mitigate a trend of increasing polarisation on AI: some characterise it as a ‘silver bullet’, while others feel a need to ‘close the gate’ on its runaway advances. Charting the course to a future that safely takes advantage of technology to support and empower the most vulnerable people requires careful reflection on a range of perspectives. The humanitarian sector and technology groups need to convene more productive conversations, moving from raising broad concerns to taking practical steps.

A people-centred approach

The voice, participation, and empowerment of crisis- and conflict-affected populations are paramount for effective humanitarian action. Local action and local innovation are highly valued in the humanitarian sector, while at the same time being notoriously difficult to support and implement.

The co-creation of AI (participatory AI) with front line responders, tech developers, local actors, and communities is important to meet real needs and to mitigate risks and harms.  A people-centred approach can help identify the most appropriate uses of AI, reduce bias and harm from AI systems, remove culturally insensitive inputs, and specify guardrails for when, where, and with whom AI tools are appropriate. This approach, ideally delivered through local talent, can also enhance trust among users.

“This is about people, and people must be at the centre of our approaches and conversations.”

A strong business model for AI in humanitarian contexts would prioritise locally identified needs and humanitarian principles, envision how AI could solve problems, and identify the underlying economic incentives and sustainability concerns.

However, implementing this approach can be challenging.

Digital public humanitarian infrastructure

A digital public infrastructure for high-quality data sharing and interoperability is critically important. Systems and infrastructure for using AI in humanitarian settings are necessary and should be prioritised, with long-term investment to ensure their viability, sustainability, safety, and effectiveness.

High-quality data is imperative for effective AI, yet obtaining the necessary local data can be challenging: it can be expensive, can require community knowledge, and may depend on data sets that do not exist.

Data selection and interoperability are crucial, and standardisation of data through repositories such as the Humanitarian Data Exchange (HDX) could drive interoperability and support better understandings of the limitations of data.

Concerns about data are multiple and interlinked, especially in the context of vulnerable communities that lack AI literacy. Consent alone is insufficient in the face of AI’s risks, when questions of data ownership and the potential use of data by hostile actors pose serious threats to security.

“There is vast potential for this technology to truly transform our collective humanitarian work.”

Safety, ethics, and governance

When humanitarian actors experiment with AI tools and models, action should be taken responsibly and be consistent with humanitarian principles.

Humanitarians can look to other sectors to guide the assurance of safety, including new standards that support the safe development of AI tools and a range of evaluation and assurance approaches. The humanitarian sector needs to consider how new AI regulations fit with existing national, regional, and global regulations, and remain firmly grounded in the context of vulnerable people who need humanitarian aid.

Managing risks within programmes and approaches, and in relationships with actors with different value systems, requires an examination of ethics and the application of existing rules and regulations. The moral underpinnings of humanitarian action have shaped digital ethics in the past, and this knowledge can be harnessed and applied to AI.

The relationship between the private and humanitarian sectors is fraught with tension over values, principles, and motivations around AI. Both sectors will play an essential part in the safe use of AI in humanitarian crises. Developing relationships so that AI tools are shaped by expertise from both groups will become increasingly important.

“We are at a pivotal point for steering the world towards the responsible and inclusive use of AI.”

AI capacity

A central question for humanitarians is how to strengthen capacity on AI.  A better understanding of opportunities and risks across different dimensions is needed.

Senior decision-makers, technical staff, and operational staff within organisations (local and national NGOs, governments, and international organisations) need greater understanding of both how their organisation can deliver differently and what should not change.

This should be accompanied by a requirement for technical experts to enhance their understanding of humanitarian action and principles.  As well as tech organisations, humanitarians need to work with academics, regulators, and policy makers more generally to ensure that humanitarian priorities contribute to the wider conversations.
